Center Loss


Kinship Verification through a Forest Neural Network

Nazari, Ali, Moghaddam, Mohsen Ebrahimi, Borzoei, Omidreza

arXiv.org Artificial Intelligence

Early kinship-verification methods relied on separate face representations, which are less accurate than joint representations of parents' and children's facial images learned from scratch. We propose an approach based on graph neural network concepts that uses face representations yet achieves results comparable to joint-representation algorithms. Moreover, we design the structure of the classification module and introduce a new combination of losses that engages the center loss gradually during training. Experiments on KinFaceW-I and KinFaceW-II demonstrate the effectiveness of our approach: we achieved the best result on KinFaceW-II, with an average improvement of nearly 1.6 across all kinship types, and were close to the best on KinFaceW-I. The code is available at https://github.com/ali-nazari/Kinship-Verification
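"Engaging the center loss gradually" suggests ramping the center-loss coefficient up over training rather than applying it at full strength from the start. The paper's exact schedule is not given here; the sketch below is a minimal linear warm-up, with hypothetical `warmup_epochs` and `max_weight` parameters:

```python
def center_loss_weight(epoch, warmup_epochs=10, max_weight=0.01):
    # Linearly ramp the center-loss coefficient from 0 to max_weight over the
    # first warmup_epochs epochs, then hold it constant for the rest of training.
    return max_weight * min(1.0, epoch / warmup_epochs)
```

At each epoch the returned value would multiply the center-loss term before it is added to the classification loss.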


Learning Discriminative Features from Spectrograms Using Center Loss for Speech Emotion Recognition

Dai, Dongyang, Wu, Zhiyong, Li, Runnan, Wu, Xixin, Jia, Jia, Meng, Helen

arXiv.org Artificial Intelligence

Identifying the emotional state from speech is essential for natural interaction between a machine and a speaker. However, extracting effective features for emotion recognition is difficult because emotions are ambiguous. We propose a novel approach that learns discriminative features from variable-length spectrograms for emotion recognition by combining softmax cross-entropy loss and center loss. The softmax cross-entropy loss makes features from different emotion categories separable, while the center loss efficiently pulls features belonging to the same emotion category toward their center. Combining the two losses greatly enhances the discriminative power, leading the network to learn more effective features for emotion recognition. As the experimental results demonstrate, after introducing center loss, both unweighted and weighted accuracy improve by over 3% on Mel-spectrogram input and by more than 4% on Short-Time Fourier Transform spectrogram input.
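The joint supervision described above is the standard combination L = L_softmax + λ·L_center. A minimal NumPy sketch of both terms (the coefficient `lam` and the array shapes are illustrative assumptions, not values from the paper):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    # Numerically stable log-softmax, then mean negative log-likelihood.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    # Half the mean squared distance of each feature to its class center.
    return 0.5 * ((features - centers[labels]) ** 2).sum(axis=1).mean()

def joint_loss(logits, features, labels, centers, lam=0.01):
    # L = L_softmax + lam * L_center: separability plus intra-class compactness.
    return softmax_cross_entropy(logits, labels) + lam * center_loss(features, labels, centers)
```

In practice the class centers are trainable buffers updated alongside the network; here they are passed in as a plain array for clarity.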


Large Margin Discriminative Loss for Classification

Nguyen, Hai-Vy, Gamboa, Fabrice, Zhang, Sixin, Chhaibi, Reda, Gratton, Serge, Giaccone, Thierry

arXiv.org Machine Learning

In this paper, we introduce a novel discriminative loss function with a large margin in the context of deep learning. This loss boosts the discriminative power of neural networks, expressed as intra-class compactness and inter-class separability. On the one hand, class compactness is ensured by keeping samples of the same class close to each other. On the other hand, inter-class separability is enforced by a margin loss that guarantees a minimum distance from each class to its closest decision boundary. All terms in our loss have an explicit meaning, giving a direct view of the learned feature space. We mathematically analyze the relation between the compactness and margin terms, providing a guideline on how the hyper-parameters affect the learned features. We also analyze properties of the gradient of the loss with respect to the network parameters and, based on this, design a strategy called partial momentum updating that enjoys both stability and consistency in training. Furthermore, we investigate generalization errors to gain better theoretical insight. In our experiments, our loss function systematically improves test accuracy over the standard softmax loss.
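The paper defines its margin relative to each class's closest decision boundary; as a rough illustration only, the sketch below uses center-to-center distance as a proxy for separation. Both `margin` and the weight `lam` are hypothetical, and this is not the paper's exact loss:

```python
import numpy as np

def compactness_margin_loss(features, labels, centers, margin=1.0, lam=0.5):
    # Intra-class compactness: pull each sample toward its class center.
    compact = ((features - centers[labels]) ** 2).sum(axis=1).mean()
    # Inter-class separation (proxy): hinge penalty when two class centers
    # are closer than `margin` to each other.
    k = len(centers)
    sep, pairs = 0.0, 0
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(centers[i] - centers[j])
            sep += max(0.0, margin - d) ** 2
            pairs += 1
    return compact + lam * sep / pairs
```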


Searching a Lightweight Network Architecture for Thermal Infrared Pedestrian Tracking

Gao, Peng, Liu, Xiao, Wang, Yu, Yuan, Ru-Yue

arXiv.org Artificial Intelligence

Manually designed network architectures for thermal infrared pedestrian tracking (TIR-PT) require substantial effort from human experts. Neural networks with ResNet backbones are popular for TIR-PT; however, tracking is more challenging than classification or detection. This paper makes an early attempt to automatically search for an optimal network architecture for TIR-PT, employing single-bottom and dual-bottom cells as basic search units and incorporating eight candidate operations in the search space. To expedite the search, a random channel selection strategy is applied before assessing candidate operations. Classification, batch-hard triplet, and center losses are used jointly to retrain the searched architecture. The outcome is a high-performance network architecture that is both parameter- and computation-efficient. Extensive experiments demonstrated the effectiveness of the automated method.


Enhancing Motor Imagery Decoding in Brain Computer Interfaces using Riemann Tangent Space Mapping and Cross Frequency Coupling

Xiong, Xiong, Su, Li, Huang, Jinguo, Kang, Guixia

arXiv.org Artificial Intelligence

Objective: Motor imagery (MI) is a crucial experimental paradigm in brain-computer interfaces (BCIs), aiming to decode motor intentions from electroencephalogram (EEG) signals. Method: Drawing inspiration from Riemannian geometry and cross-frequency coupling (CFC), this paper introduces Riemann Tangent Space Mapping using a Dichotomous Filter Bank with a Convolutional Neural Network (DFBRTS) to enhance the representation quality and decoding capability of MI features. DFBRTS first filters EEG signals through a dichotomous filter bank structured as a complete binary tree. It then employs Riemann tangent space mapping to extract salient EEG features within each sub-band. Finally, a lightweight convolutional neural network performs further feature extraction and classification under the joint supervision of cross-entropy and center loss. To validate its efficacy, extensive experiments were conducted on two well-established benchmark datasets: the BCI Competition IV 2a (BCIC-IV-2a) dataset and the OpenBMI dataset. DFBRTS was benchmarked against several state-of-the-art MI decoding methods, including other Riemannian-geometry-based approaches. Results: DFBRTS significantly outperforms the other MI decoding algorithms on both datasets, achieving a classification accuracy of 78.16% for four-class and 71.58% for two-class hold-out classification, compared to the existing benchmarks.
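Riemann tangent space mapping projects symmetric positive-definite (SPD) covariance matrices into a Euclidean tangent space where ordinary classifiers apply. A NumPy sketch of the standard projection log(C_ref^{-1/2} Σ C_ref^{-1/2}) followed by upper-triangle vectorization; the paper's exact pipeline (filter bank, reference-point choice) is not reproduced here:

```python
import numpy as np

def spd_logm(S):
    # Matrix logarithm of an SPD matrix via its eigendecomposition.
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def tangent_space_map(cov, ref):
    # Whiten by the reference point, take the matrix log, and vectorize the
    # upper triangle (off-diagonals scaled by sqrt(2) to preserve the norm).
    w, V = np.linalg.eigh(ref)
    ref_inv_sqrt = (V * (w ** -0.5)) @ V.T
    L = spd_logm(ref_inv_sqrt @ cov @ ref_inv_sqrt)
    iu = np.triu_indices(L.shape[0])
    weights = np.where(iu[0] == iu[1], 1.0, np.sqrt(2.0))
    return weights * L[iu]
```

The reference point `ref` is typically the Riemannian or Euclidean mean of the training covariances; mapping a matrix onto the tangent space at itself yields the zero vector.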


Learning Robust Self-attention Features for Speech Emotion Recognition with Label-adaptive Mixup

Kang, Lei, Zhang, Lichao, Jiang, Dazhi

arXiv.org Artificial Intelligence

Speech emotion recognition (SER) aims to recognize human emotions in natural verbal interaction with machines, a challenging problem because human emotions are ambiguous. Despite recent progress in SER, state-of-the-art models still struggle to achieve satisfactory performance. We propose a self-attention-based method that combines label-adaptive mixup with center loss. By adapting the label probabilities in mixup and fitting the center loss to the mixup training scheme, our proposed method outperforms state-of-the-art methods.
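Label-adaptive mixup builds on standard mixup; the paper's adaptation of the label probabilities is its contribution and is not reproduced here. For reference, a sketch of plain mixup, with an illustrative `alpha`:

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    # Standard mixup: convex-combine two inputs and their one-hot labels
    # with a mixing ratio lam drawn from Beta(alpha, alpha).
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2
```

The label-adaptive variant would replace the fixed convex combination of labels with adapted probabilities; the input mixing stays the same.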


Incremental Class Learning using Variational Autoencoders with Similarity Learning

Huo, Jiahao, van Zyl, Terence L.

arXiv.org Artificial Intelligence

Catastrophic forgetting in neural networks during incremental learning remains a challenging problem. Previous research investigated catastrophic forgetting in fully connected networks, with some earlier work exploring activation functions and learning algorithms. Neural networks have since been extended to similarity learning, so understanding how similarity-learning loss functions are affected by catastrophic forgetting is of significant interest. Our research investigates catastrophic forgetting for four well-known similarity-based loss functions during incremental class learning: angular, contrastive, center, and triplet loss. Our results show that the rate of catastrophic forgetting differs across loss functions and datasets: angular loss was least affected, followed by contrastive loss, triplet loss, and center loss with good mining techniques. We implemented three existing incremental learning techniques, iCaRL, EWC, and EBLL, and further propose a novel technique that uses variational autoencoders (VAEs) to generate representations, passed through the network's intermediate layers, as exemplars. Our method outperformed the three existing state-of-the-art techniques, showing that stored images (exemplars) are not required for incremental learning with similarity learning. The representations generated by the VAEs help preserve the regions of the embedding space used by prior knowledge so that new knowledge does not "overwrite" it.


Triplet Loss-less Center Loss Sampling Strategies in Facial Expression Recognition Scenarios

Rajoli, Hossein, Lotfi, Fatemeh, Atyabi, Adham, Afghah, Fatemeh

arXiv.org Artificial Intelligence

Facial expressions convey rich information and play a crucial role in emotional expression. Deep neural networks (DNNs), accompanied by deep metric learning (DML) techniques, boost a model's discriminative ability in facial expression recognition (FER) applications. A DNN equipped only with a classification loss such as cross-entropy cannot compact intra-class feature variation or separate inter-class feature distances as well as one fortified by a supporting DML loss term. The triplet center loss (TCL) function is applied to all dimensions of each sample's embedding in the embedding space. In our work, we develop three negative-sample selection strategies: fully synthesized, semi-synthesized, and prediction-based. To achieve better results, we introduce a selective attention module that combines pixel-wise and element-wise attention coefficients derived from high-semantic deep features of the input samples. We evaluated the proposed method on RAF-DB, a highly imbalanced dataset. The experimental results reveal significant improvements over the baseline for all three negative-sample selection strategies.
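Triplet center loss, as commonly defined, pushes each embedding closer to its own class center than to the nearest other-class center by a margin. A NumPy sketch of that hinge formulation (the `margin` value is illustrative, and the paper's negative-sample selection strategies are not reproduced):

```python
import numpy as np

def triplet_center_loss(features, labels, centers, margin=1.0):
    # Pairwise distances from every feature to every class center: shape (n, k).
    d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
    # Distance to the sample's own center (positive term).
    pos = d[np.arange(len(labels)), labels]
    # Distance to the nearest *other* center (negative term).
    d_other = d.copy()
    d_other[np.arange(len(labels)), labels] = np.inf
    neg = d_other.min(axis=1)
    # Hinge: own-center distance should beat the nearest rival by `margin`.
    return np.maximum(0.0, pos + margin - neg).mean()
```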


Maximally Compact and Separated Features with Regular Polytope Networks

Pernici, Federico, Bruni, Matteo, Baecchi, Claudio, Del Bimbo, Alberto

arXiv.org Artificial Intelligence

Convolutional neural networks (CNNs) trained with the softmax loss are widely used classification models for several vision tasks. Typically, a learnable transformation (i.e., the classifier) is placed at the end of such models, returning class scores that are normalized into probabilities by softmax. This learnable transformation plays a fundamental role in determining the network's internal feature representation. In this work, we show how to extract CNN features with maximum inter-class separability and maximum intra-class compactness by setting the parameters of the classifier transformation as non-trainable (i.e., fixed). We obtain features similar to those produced by the well-known center loss [Wen et al., 2016] and related approaches, but with several practical advantages, including maximal exploitation of the available feature space, a reduction in the number of network parameters, and no need for auxiliary losses besides softmax. Our approach unifies and generalizes two apparently different classes of methods: discriminative features, pioneered by the center loss [Wen et al., 2016], and fixed classifiers, first evaluated in [Hoffer et al., 2018]. Preliminary qualitative experiments provide insight into the potential of the combined strategy.
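A fixed classifier with maximally separated class directions can be built from the vertices of a regular simplex, one of the regular polytopes the paper considers. A NumPy sketch of such a weight matrix (embedding the vertices into the first `num_classes` coordinates is a convention assumed here, not taken from the paper):

```python
import numpy as np

def simplex_classifier(num_classes, dim):
    # Fixed (non-trainable) classifier weights: unit-norm vertices of a regular
    # simplex, so every pair of class directions has the same cosine -1/(k-1).
    if dim < num_classes:
        raise ValueError("feature dimension too small for this embedding")
    e = np.eye(num_classes)
    v = e - e.mean(axis=0)                          # center vertices at the origin
    v /= np.linalg.norm(v, axis=1, keepdims=True)   # project onto the unit sphere
    w = np.zeros((num_classes, dim))
    w[:, :num_classes] = v                          # embed into the feature space
    return w
```

Because the pairwise angles are all equal and maximal, training against these fixed directions encourages equally separated, compact class clusters without an auxiliary loss.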


A Novel Automatic Modulation Classification Scheme Based on Multi-Scale Networks

Zhang, Hao, Zhou, Fuhui, Wu, Qihui, Wu, Wei, Hu, Rose Qingyang

arXiv.org Artificial Intelligence

Automatic modulation classification enables intelligent communications and is of crucial importance in today's and future wireless communication networks. Although many automatic modulation classification schemes have been proposed, they cannot tackle the intra-class diversity problem caused by dynamic changes in the wireless communication environment. To overcome this problem, and inspired by face recognition, a novel automatic modulation classification scheme based on a multi-scale network is proposed in this paper. Moreover, a novel loss function that combines the center loss and the cross-entropy loss is exploited to learn both discriminative and separable features, further improving classification performance. Extensive simulation results demonstrate that the proposed scheme achieves better classification accuracy than the benchmark schemes. The influence of the network parameters, and of the loss function with the two-stage training strategy, on classification accuracy is also investigated. H. Zhang, F. Zhou, and Q. Wu are with the College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China, and with the Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space (Nanjing University of Aeronautics and Astronautics), Ministry of Industry and Information Technology, Nanjing 211106, China (e-mail: haozhangcn@nuaa.edu.cn). W. Wu is with Nanjing University of Posts and Telecommunications, Nanjing 210003, China, and also with the Key Laboratory of Dynamic Cognitive System of Electromagnetic Spectrum Space, Ministry of Industry and Information Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China (e-mail: weiwu@njupt.edu.cn). R. Q. Hu is with the Department of Electrical and Computer Engineering, Utah State University, Logan, UT 84322 USA (e-mail: rosehu@ieee.org). With the commercial application of fifth-generation (5G) wireless communication networks, sixth-generation (6G) networks have received increasing attention from both academia and industry [1], [2]. Intelligent communication empowered by artificial intelligence is one of the most evident characteristics of 6G wireless communication systems. To realize this goal, it is of crucial importance to automatically recognize modulation types [3].